
    Hybrid Encryption in the Multi-User Setting

    This paper presents an attack in the multi-user setting on various public-key encryption schemes standardized in IEEE 1363a, SECG SEC 1, and ISO 18033-2. The multi-user setting is a security model proposed by Bellare et al. that allows adversaries to simultaneously attack multiple ciphertexts created by one or more users; an attack is considered successful if the attacker learns information about any of the plaintexts. We show that many standardized public-key encryption schemes are vulnerable in this model and give ways to prevent the attack. We also show that the key derivation function and pseudorandom generator used to implement a hybrid encryption scheme must themselves be secure in the multi-user setting for the overall primitive to be secure in that setting. As an illustration of the former, we show that using HKDF (as standardized in NIST SP 800-56C) as the key derivation function for certain standardized hybrid public-key encryption schemes is insecure in the multi-user setting.
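The paper's concrete attacks are not reproduced in the abstract. As background, the HKDF extract-then-expand construction it discusses can be sketched directly from RFC 5869 (with which the two-step derivation in NIST SP 800-56C aligns); the context-binding value passed as `info` below is an illustrative assumption, not the paper's prescribed fix:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): PRK = HMAC-Hash(salt, IKM)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # HKDF-Expand: OKM = T(1) || T(2) || ... truncated to `length` bytes,
    # where T(i) = HMAC(PRK, T(i-1) || info || i)
    okm, t = b"", b""
    counter = 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

# In a KEM/DEM hybrid scheme, the KDF turns the KEM's shared secret into
# the DEM key. Binding per-user context (e.g. the recipient's public key)
# into `info` is one way to separate users' derived keys -- the kind of
# domain separation the multi-user setting calls for. (Hypothetical usage.)
shared_secret = b"\x01" * 32           # stand-in for a KEM output
dem_key = hkdf_expand(
    hkdf_extract(salt=b"", ikm=shared_secret),
    info=b"recipient-public-key-bytes",  # hypothetical context binding
    length=32,
)
```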

    Short One-Time Signatures

    We present a new one-time signature scheme with short signatures. The scheme supports aggregation and batch verification, and admits efficient proofs of knowledge. It has a fast signing algorithm, requiring only modular additions, and its verification cost is comparable to ECDSA verification. These properties make the scheme suitable for applications on resource-constrained devices such as smart cards and sensor nodes. Along the way, we give a unified description of five previous one-time signature schemes and improve their parameter selection; as a corollary, we obtain a fail-stop signature scheme with short signatures.
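The abstract does not give the scheme's details; as a generic illustration of the one-time paradigm it improves on, here is a minimal Lamport-style sketch (hash-based, and unrelated to the paper's modular-addition construction):

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(msg_bits: int = 256):
    # Secret key: two random preimages per message-digest bit;
    # public key: their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(msg_bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    # Sign the SHA-256 digest of the message, bit by bit.
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(digest) * 8)]

def sign(sk, msg: bytes):
    # Reveal one preimage per digest bit -- usable exactly once.
    return [pair[bit] for pair, bit in zip(sk, bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pair[bit] for s, pair, bit in zip(sig, pk, bits(msg)))
```

A key pair must sign only one message: each signature reveals half of the secret preimages, and reuse would let a forger mix revealed values across signatures.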

    Cycles of Police Reform in Latin America

    Over the last quarter-century, post-conflict and post-authoritarian transitions in Latin America have been accompanied by a surge in social violence, acquisitive crime, and insecurity. These phenomena have been driven by an expanding international narcotics trade, by the long-term effects of civil war and counter-insurgency (resulting in, inter alia, an increased availability of small arms and a pervasive grammar of violence), and by structural stresses on society (unemployment, hyper-inflation, widening income inequality). Local police forces proved generally ineffective in preventing, resolving, or detecting such crime and forms of “new violence” due to corruption, frequent complicity in criminal networks, poor training and low pay, and the routine use of excessive force without due sanction. Why, then, have governments been slow to prioritize police reform, and why have reform efforts borne largely “limited or nonexistent” long-term results? This chapter highlights a number of lessons suggested by various efforts to reform the police in Latin America over the period 1995-2010. It focuses on two clusters of countries. One is Brazil and the Southern Cone countries (Chile, Argentina, and Uruguay), which made the transition to democracy from prolonged military authoritarian rule in the mid- to late 1980s. The other is Central America and the Andean region (principally El Salvador, Guatemala, Honduras, Peru, and Colombia), which have been emerging from armed conflict since the mid-1990s. The chapter first examines the long history of international involvement in police and security sector reform in order to identify long-run tropes and path dependencies. It then focuses on a number of recurring themes: cycles of de- and re-militarization of the policing function; the “security gap” and “democratization dilemmas” involved in structural reforms; the opportunities offered by decentralization for more community-oriented policing; and police capacity to resist reform and undermine accountability mechanisms.

    Fast relational learning using bottom clause propositionalization with artificial neural networks

    Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly, by manipulating first-order rules, or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by the ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature-extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph while generally being faster. BCP achieved a statistically significant improvement in accuracy over RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90% of the features can be achieved with a small loss of accuracy.
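The mapping of bottom clauses onto numerical vectors can be sketched as follows; the toy literals and the exact encoding (one boolean feature per distinct body literal across all bottom clauses) are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the propositionalization step: each body literal of a
# bottom clause becomes one boolean feature, so every example maps
# to a fixed-length 0/1 vector suitable for a neural network.
def build_vocabulary(bottom_clause_bodies):
    # One feature index per distinct literal, in sorted order.
    vocab = sorted({lit for body in bottom_clause_bodies for lit in body})
    return {lit: i for i, lit in enumerate(vocab)}

def to_vector(body, vocab):
    vec = [0] * len(vocab)
    for lit in body:
        vec[vocab[lit]] = 1
    return vec

# Toy bottom-clause bodies, written as variablized literal strings.
examples = [
    ["parent(A,B)", "male(A)"],    # e.g. a positive example for father(A,B)
    ["parent(A,B)", "female(A)"],
]
vocab = build_vocabulary(examples)
vectors = [to_vector(body, vocab) for body in examples]
```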

    Symmetric-key Corruption Detection: When XOR-MACs Meet Combinatorial Group Testing

    We study a class of MACs, which we call corruption-detectable MACs, that can not only check the integrity of the whole message but also detect which part of the message is corrupted. This can be seen as an application of classical Combinatorial Group Testing (CGT) to message authentication. However, previous work on this application has an inherent communication limitation. We present a novel approach that combines CGT with a class of linear MACs (XOR-MACs) and breaks this limit. Our proposal, XOR-GTM, has a significantly smaller communication cost than any previous scheme while keeping the same corruption-detection capability. Our numerical examples for a storage application show a reduction in communication by a factor of roughly 15 to 70 compared with previous schemes. XOR-GTM is parallelizable and as efficient as standard MACs, and we prove it secure under standard pseudorandomness assumptions.
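XOR-GTM itself is not specified in the abstract. A minimal sketch of the underlying group-testing idea, assuming a single corrupted block and using plain per-test HMAC tags rather than the paper's XOR-MAC aggregation:

```python
import hashlib
import hmac

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def tag_tests(key: bytes, blocks, num_tests: int):
    # Non-adaptive design: test t covers every block whose 1-based
    # index has bit t set, so the pattern of failing tests encodes a
    # single corrupted block's position in binary.
    tags = []
    for t in range(num_tests):
        covered = b"".join(b for i, b in enumerate(blocks) if ((i + 1) >> t) & 1)
        tags.append(mac(key, bytes([t]) + covered))
    return tags

def locate_corruption(key: bytes, blocks, tags):
    # Recompute each test; failing tests spell out the corrupted
    # block's 1-based index. Returns the 0-based index, or None.
    code = 0
    for t, tag in enumerate(tags):
        covered = b"".join(b for i, b in enumerate(blocks) if ((i + 1) >> t) & 1)
        if not hmac.compare_digest(mac(key, bytes([t]) + covered), tag):
            code |= 1 << t
    return code - 1 if code else None
```

With n blocks, ceil(log2(n+1)) tests locate a single corruption; the paper's contribution lies in driving down the tag/communication cost of such designs while keeping the detection capability.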

    A Knowledge-Based Perspective of the Distributed Design of Object Oriented Databases

    The performance of applications on Object-Oriented Database Management Systems (OODBMSs) is strongly affected by Distributed Design, which reduces the irrelevant data accessed by applications and the data exchanged among sites. In an OO environment, Distributed Design is a very complex task and an open research problem. In this work we propose a knowledge-based approach to the fragmentation phase of the distributed design of object-oriented databases. We show a rule-based implementation of an analysis algorithm from our previous work and propose some ideas towards the use of Inductive Logic Programming (ILP) to perform a knowledge discovery/revision process using our set of rules as background knowledge. The objective is to uncover previously unknown issues to be considered in the distributed design process, and our main aim here is to show the viability of performing a revision process in order to obtain progressively better fragmentation algorithms...